MetaNODE: Prototype Optimization as a Neural ODE for Few-Shot Learning

Authors

Abstract

Few-Shot Learning (FSL) is a challenging task, i.e., how to recognize novel classes with few examples? Pre-training based methods effectively tackle the problem by pre-training a feature extractor and then predicting novel classes via a cosine nearest neighbor classifier with mean-based prototypes. Nevertheless, due to the data scarcity, the mean-based prototypes are usually biased. In this paper, we attempt to diminish the prototype bias by regarding it as a prototype optimization problem. To this end, we propose a novel meta-learning based prototype optimization framework to rectify prototypes, i.e., introducing a meta-optimizer to optimize prototypes. Although the existing meta-optimizers can also be adapted to our framework, they all overlook a crucial gradient bias issue, i.e., the mean-based gradient estimation is also biased on sparse data. To address this issue, we regard the gradient and its flow as meta-knowledge and then propose a novel Neural Ordinary Differential Equation (ODE)-based meta-optimizer to polish prototypes, called MetaNODE. In this meta-optimizer, we first view the mean-based prototypes as initial prototypes, and then model the process of prototype optimization as continuous-time dynamics specified by a Neural ODE. A gradient flow inference network is carefully designed to learn to estimate the continuous gradient flow for prototype dynamics. Finally, the optimal prototypes can be obtained by solving the Neural ODE. Extensive experiments on miniImagenet, tieredImagenet, and CUB-200-2011 show the effectiveness of our method.
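
To make the pipeline concrete, here is a minimal PyTorch sketch of the idea, not the authors' implementation: the mean-based prototypes serve as the initial state, a small network (the hypothetical `GradientFlowNet` below stands in for the paper's gradient flow inference network) estimates dp/dt, and a fixed-step Euler loop stands in for the Neural ODE solver. All names and the Euler step count are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradientFlowNet(nn.Module):
    """Hypothetical inference network g(t, p): estimates the continuous
    gradient flow dp/dt for the prototype dynamics."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, t, p):
        # Condition the flow on time t by concatenating it to each prototype.
        t_col = t.expand(p.size(0), 1)
        return self.net(torch.cat([p, t_col], dim=1))

def initial_prototypes(support_feats, support_labels, n_classes):
    """Mean-based prototypes from the support set (the biased starting point)."""
    return torch.stack([
        support_feats[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])

def solve_prototype_ode(p0, flow, t1=1.0, steps=20):
    """Integrate dp/dt = flow(t, p) from t=0 to t=t1 with fixed-step Euler.
    The paper solves a Neural ODE; Euler is the simplest stand-in solver."""
    p, dt = p0, t1 / steps
    for i in range(steps):
        t = torch.tensor([[i * dt]])
        p = p + dt * flow(t, p)
    return p

def classify(query_feats, prototypes):
    """Cosine nearest-neighbor classifier over the rectified prototypes."""
    logits = F.normalize(query_feats, dim=1) @ F.normalize(prototypes, dim=1).T
    return logits.argmax(dim=1)
```

In the paper, the flow network is meta-trained across episodes so that solving the ODE moves the biased mean prototypes toward the true class centers; the sketch only shows the inference-time shape of that computation.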


Similar Articles

Few-Shot Learning with Graph Neural Networks

We propose to study the problem of few-shot learning with the prism of inference on a partially observed graphical model, constructed from a collection of input images whose label can be either observed or not. By assimilating generic message-passing inference algorithms with their neural-network counterparts, we define a graph neural network architecture that generalizes several of the recentl...
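
The snippet above describes replacing message-passing inference with learned neural-network counterparts. The sketch below shows one generic such step in PyTorch, assuming a dense similarity-based adjacency; it illustrates the general mechanism, not this paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MessagePassingLayer(nn.Module):
    """One generic message-passing step: each node aggregates its neighbors'
    features through a learned, similarity-based adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.update = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x):
        # Dense adjacency from pairwise feature distances (one common choice).
        adj = F.softmax(-torch.cdist(x, x), dim=1)
        messages = adj @ x  # aggregate neighbor features
        return F.relu(self.update(torch.cat([x, messages], dim=1)))
```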


Few-shot Learning

Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a classifier has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity classifiers requires many iterative steps over many examples to perform well. Here, we propose ...


Imitation networks: Few-shot learning of neural networks from scratch

In this paper, we propose imitation networks, a simple but effective method for training neural networks with a limited amount of training data. Our approach inherits the idea of knowledge distillation that transfers knowledge from a deep or wide reference model to a shallow or narrow target model. The proposed method employs this idea to mimic predictions of reference estimators that are much ...
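
The mimicry described above follows the standard soft-target knowledge-distillation recipe. A minimal PyTorch version, with an assumed temperature hyperparameter, looks like this:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Soft-target loss: the student mimics the reference (teacher) model's
    predictions, softened by a temperature to expose class similarities."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=1)
    log_student = F.log_softmax(student_logits / t, dim=1)
    # KL divergence, scaled by t^2 to keep gradient magnitudes comparable.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)
```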


Prototypical Networks for Few-shot Learning

A recent approach to few-shot classification called matching networks has demonstrated the benefits of coupling metric learning with a training procedure that mimics the test setting. This approach relies on an attention scheme that forms a distribution over all points in the support set, scaling poorly with its size. We propose a more streamlined approach, prototypical networks, that learns a metric space...
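
The prototype classifier this snippet contrasts with attention fits in a few lines; the sketch below follows the standard formulation (support-set class means, squared Euclidean distance as the metric):

```python
import torch

def prototypical_logits(support, labels, queries, n_classes):
    """Classify queries by distance to class prototypes (support-set means),
    instead of attending over every support point as matching networks do."""
    prototypes = torch.stack([
        support[labels == c].mean(dim=0) for c in range(n_classes)
    ])
    # Negative squared Euclidean distance acts as the logit for each class.
    return -torch.cdist(queries, prototypes).pow(2)
```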


A Unified approach for Conventional Zero-shot, Generalized Zero-shot and Few-shot Learning

Prevalent techniques in zero-shot learning do not generalize well to other related problem scenarios. Here, we present a unified approach for conventional zero-shot, generalized zero-shot and few-shot learning problems. Our approach is based on a novel Class Adapting Principal Directions (CAPD) concept that allows multiple embeddings of image features into a semantic space. Given an image, our ...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2022

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v36i8.20885